✨ (go/v4): Add optional kubectl context locking for e2e tests #5336
Conversation
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: camilamacedo86. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 049342e to aa0c6e4.
Force-pushed from e664caa to c8fa24a.
@mandarjog could you please take a look at this one?
Force-pushed from c8fa24a to 1df6ec6.
E2E tests now support optional context locking via the KUBE_CONTEXT environment variable. When set, tests validate and lock to the specified kubectl context, preventing context switching during execution.

Features:
- Display the current kubectl context at test startup
- Validate that the context matches KUBE_CONTEXT if set
- Add the --context flag to all kubectl commands when locked
- 100% backward compatible (opt-in feature)

Usage: KUBE_CONTEXT=kind-test make test-e2e

This addresses feedback from PR kubernetes-sigs#5329 about preventing inadvertent context switching during test execution, while maintaining ease of use (it works without any env vars by default).

Assisted-by: Cursor
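For reference, a minimal sketch (not the PR's actual code) of how the "validate that the context matches KUBE_CONTEXT if set" step could be wired into the suite; the function name and error wording here are assumptions:

```go
// Illustrative sketch only: fail fast when KUBE_CONTEXT is set but does not
// match the context kubectl would otherwise use. Names are hypothetical.
package e2e

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// validateContextLock is a no-op when KUBE_CONTEXT is unset (the opt-in
// default) and otherwise errors if the active kubectl context differs.
func validateContextLock() error {
	want := os.Getenv("KUBE_CONTEXT")
	if want == "" {
		return nil
	}
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		return fmt.Errorf("reading current kubectl context: %w", err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		return fmt.Errorf("KUBE_CONTEXT=%q but current context is %q", want, got)
	}
	return nil
}
```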
Force-pushed from 1df6ec6 to cd07c62.
/test pull-kubebuilder-e2e-k8s-1-34-0
vitorfloriano left a comment
Doesn't kubectl validate available contexts based on kubeconfig?
I mean, if we only pass KUBE_CONTEXT to the --context flag without first setting this new context in kubeconfig via kubectl config set-context, wouldn't that cause an error?
That's a trade-off. The concern is that this would change the end user's local config. By default, we wouldn't undo/reset it after the tests, right? If so, I don't think we should be invasive and modify local configs implicitly.
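For comparison, this is roughly what the more invasive approach would entail: switching the global context with kubectl config use-context and restoring the user's original context afterwards. A minimal sketch with hypothetical helper names; the --context flag approach avoids all of this because it never writes to kubeconfig:

```go
// Illustrative sketch only: switching the global kubeconfig context would
// require saving and later restoring the user's original context.
package e2e

import (
	"os/exec"
	"strings"
)

// switchContext changes the global kubeconfig context and returns a restore
// function that puts the original context back after the tests.
func switchContext(target string) (restore func() error, err error) {
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		return nil, err
	}
	original := strings.TrimSpace(string(out))

	if err := exec.Command("kubectl", "config", "use-context", target).Run(); err != nil {
		return nil, err
	}
	return func() error {
		return exec.Command("kubectl", "config", "use-context", original).Run()
	}, nil
}
```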
# - CERT_MANAGER_INSTALL_SKIP=true
# Environment variables:
# - KIND_CLUSTER: Name of the Kind cluster (default: {{ .ProjectName }}-test-e2e)
# - KUBE_CONTEXT: Kubectl context to use (default: current-context)
I ran make test-e2e without passing the KUBE_CONTEXT env var and got this:
@vitorfloriano ➜ /workspaces/test-operator $ make test-e2e
No kind clusters found.
Creating Kind cluster 'test-operator-test-e2e'...
Creating cluster "test-operator-test-e2e" ...
✓ Ensuring node image (kindest/node:v1.35.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-test-operator-test-e2e"
You can now use your cluster with:
kubectl cluster-info --context kind-test-operator-test-e2e
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
mkdir -p "/workspaces/test-operator/bin"
Downloading sigs.k8s.io/controller-tools/cmd/[email protected]
"/workspaces/test-operator/bin/controller-gen" rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
"/workspaces/test-operator/bin/controller-gen" object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
KIND=kind KIND_CLUSTER=test-operator-test-e2e go test -tags=e2e ./test/e2e/ -v -ginkgo.v
So I believe the default KUBE_CONTEXT would be kind-{{ .ProjectName }}-test-e2e, right?
Suggested change:
- # - KUBE_CONTEXT: Kubectl context to use (default: current-context)
+ # - KUBE_CONTEXT: Kubectl context to use (default: kind-{{ .ProjectName }}-test-e2e)
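If the default were derived rather than just documented, the resolution could look roughly like this; a sketch only, where resolveKubeContext is a hypothetical helper and the fallback mirrors the scaffold's {{ .ProjectName }}-test-e2e convention:

```go
// Illustrative sketch: derive the default kubectl context from the Kind
// cluster name, following Kind's "kind-<cluster>" naming convention.
package e2e

import "os"

func resolveKubeContext(projectName string) string {
	if ctx := os.Getenv("KUBE_CONTEXT"); ctx != "" {
		return ctx // explicit lock requested by the user
	}
	cluster := os.Getenv("KIND_CLUSTER")
	if cluster == "" {
		cluster = projectName + "-test-e2e" // scaffold default
	}
	return "kind-" + cluster
}
```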
// Environment variables (see Makefile target 'test-e2e' for examples):
// - KIND_CLUSTER: Name of the Kind cluster (default: kind)
// - KUBE_CONTEXT: Kubectl context to use (default: current-context)
// - CERT_MANAGER_INSTALL_SKIP=true: Skip CertManager installation
//
// Note: When KIND_CLUSTER=my-cluster, the kubectl context will be "kind-my-cluster"
Shouldn't this be the exact same as in the Makefile?
Suggested change:
- // Environment variables (see Makefile target 'test-e2e' for examples):
- // - KIND_CLUSTER: Name of the Kind cluster (default: kind)
- // - KUBE_CONTEXT: Kubectl context to use (default: current-context)
- // - CERT_MANAGER_INSTALL_SKIP=true: Skip CertManager installation
- //
- // Note: When KIND_CLUSTER=my-cluster, the kubectl context will be "kind-my-cluster"
+ // Environment variables:
+ // - KIND_CLUSTER: Name of the Kind cluster (default: {{ .ProjectName }}-test-e2e)
+ // - KUBE_CONTEXT: Kubectl context to use (default: kind-{{ .ProjectName }}-test-e2e)
+ // - CERT_MANAGER_INSTALL_SKIP=true: Skip CertManager installation
+ //
+ // Note: When KIND_CLUSTER=my-cluster, the kubectl context will be "kind-my-cluster"
ServiceAccount: fmt.Sprintf("e2e-%s-controller-manager", testSuffix),
CmdContext: cc,
// Optional context lock from env var
KubeContext: os.Getenv("KUBE_CONTEXT"),
I created a new cluster (my-beloved-cluster) and then ran KUBE_CONTEXT=kind-my-beloved-cluster make test-e2e and got this:
@vitorfloriano ➜ /workspaces/test-operator $ kind create cluster --name my-beloved-cluster
Creating cluster "my-beloved-cluster" ...
✓ Ensuring node image (kindest/node:v1.35.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-my-beloved-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-my-beloved-cluster
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
@vitorfloriano ➜ /workspaces/test-operator $ KUBE_CONTEXT=kind-my-beloved-cluster make test-e2e
Creating Kind cluster 'test-operator-test-e2e'...
Creating cluster "test-operator-test-e2e" ...
✓ Ensuring node image (kindest/node:v1.35.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-test-operator-test-e2e"
You can now use your cluster with:
kubectl cluster-info --context kind-test-operator-test-e2e
Thanks for using kind! 😊
"/workspaces/test-operator/bin/controller-gen" rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
"/workspaces/test-operator/bin/controller-gen" object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
KIND=kind KIND_CLUSTER=test-operator-test-e2e go test -tags=e2e ./test/e2e/ -v -ginkgo.v
=== RUN TestE2E
Starting test-operator e2e test suite
Running Suite: e2e suite - /workspaces/test-operator/test/e2e
=============================================================
Random Seed: 1769783531
Will run 2 of 2 specs
------------------------------
[BeforeSuite]
/workspaces/test-operator/test/e2e/e2e_suite_test.go:57
Using context: kind-my-beloved-cluster
STEP: building the manager image @ 01/30/26 14:32:11.868
running: "make docker-build IMG=example.com/test-operator:v0.0.1"
Kind still checked for the default {{ .ProjectName }}-test-e2e cluster and created it, instead of using my-beloved-cluster.
It also passed KIND_CLUSTER=test-operator-test-e2e to go test, instead of passing KIND_CLUSTER=my-beloved-cluster.
But then, before each test, it used the context that was passed, kind-my-beloved-cluster.
So it seems we should either fix the kind cluster creation logic for e2e tests or just adopt the context that is already created by default by kind and lock that one to kubectl --context. WDYT?
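One hedged way to fix the creation side would be to derive the Kind cluster name from KUBE_CONTEXT whenever it carries Kind's kind- prefix, so the plumbing reuses the requested cluster instead of creating the default one. This is only a sketch of the idea (resolveKindCluster is hypothetical), not the PR's implementation:

```go
// Illustrative sketch: when KUBE_CONTEXT points at a Kind context
// ("kind-<cluster>"), reuse that cluster instead of creating the default
// "<project>-test-e2e" one.
package e2e

import (
	"os"
	"strings"
)

func resolveKindCluster(defaultCluster string) string {
	if ctx := os.Getenv("KUBE_CONTEXT"); strings.HasPrefix(ctx, "kind-") {
		return strings.TrimPrefix(ctx, "kind-")
	}
	if cluster := os.Getenv("KIND_CLUSTER"); cluster != "" {
		return cluster
	}
	return defaultCluster
}
```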
var _ = BeforeSuite(func() {
	// Display kubectl context being used
	if kubectx := os.Getenv("KUBE_CONTEXT"); kubectx != "" {
		_, _ = fmt.Fprintf(GinkgoWriter, "Using context: %s\n", kubectx)
I ran KUBE_CONTEXT=kind-inexistent-cluster make test-e2e to check whether the test suite would fail, but it seems we are not checking that the context is valid before running the tests (one possible up-front check is sketched after the log below). And again, kind created the default test cluster, ignoring KUBE_CONTEXT:
@vitorfloriano ➜ /workspaces/test-operator $ KUBE_CONTEXT=kind-inexistent-cluster make test-e2e
Creating Kind cluster 'test-operator-test-e2e'...
Creating cluster "test-operator-test-e2e" ...
✓ Ensuring node image (kindest/node:v1.35.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-test-operator-test-e2e"
You can now use your cluster with:
kubectl cluster-info --context kind-test-operator-test-e2e
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
"/workspaces/test-operator/bin/controller-gen" rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
"/workspaces/test-operator/bin/controller-gen" object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
KIND=kind KIND_CLUSTER=test-operator-test-e2e go test -tags=e2e ./test/e2e/ -v -ginkgo.v
=== RUN TestE2E
Starting test-operator e2e test suite
Running Suite: e2e suite - /workspaces/test-operator/test/e2e
=============================================================
Random Seed: 1769785208
Will run 2 of 2 specs
------------------------------
[BeforeSuite]
/workspaces/test-operator/test/e2e/e2e_suite_test.go:57
Using context: kind-inexistent-cluster
...
...
...
[AfterSuite] PASSED [13.291 seconds]
------------------------------
Ran 2 of 2 Specs in 98.106 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestE2E (98.11s)
PASS
ok myoperator/my-operator/test/e2e 98.118s
make cleanup-test-e2e
make[1]: Entering directory '/workspaces/test-operator'
Deleting cluster "test-operator-test-e2e" ...
Deleted nodes: ["test-operator-test-e2e-control-plane"]
make[1]: Leaving directory '/workspaces/test-operator'
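To surface this earlier, the suite could verify up front that the requested context actually exists in the kubeconfig before any spec runs. A minimal sketch, assuming a hypothetical ensureContextExists helper:

```go
// Illustrative sketch: fail the suite up front if KUBE_CONTEXT names a
// context that is not present in the kubeconfig.
package e2e

import (
	"fmt"
	"os/exec"
	"strings"
)

func ensureContextExists(kubeContext string) error {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return fmt.Errorf("listing kubectl contexts: %w", err)
	}
	for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if name == kubeContext {
			return nil
		}
	}
	return fmt.Errorf("context %q not found in kubeconfig", kubeContext)
}
```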
// getCurrentContext returns the current kubectl context name
func getCurrentContext() (string, error) {
	cmd := exec.Command("kubectl", "config", "current-context")
Couldn't we simply extract the context that is created by default by kind when running the e2e test (kind-{{ .ProjectName }}-test-e2e) and assign it to KUBE_CONTEXT?
That would make the test fail if I just create a kind cluster myself and run the tests via the IDE, for example.
func (k *Kubectl) cmdOptionsWithContext(cmdOptions ...string) []string {
	if k.KubeContext != "" {
		return append([]string{"--context", k.KubeContext}, cmdOptions...)
	}
	return cmdOptions
}
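A quick way to pin down the helper's behavior would be a small unit test over cmdOptionsWithContext; this is only a sketch assuming the Kubectl struct above lives in a utils package. Note that it would only cover the helper itself, so it would not catch the case reported below where the helper apparently is not invoked on the command path:

```go
// Illustrative test sketch for cmdOptionsWithContext, assuming the Kubectl
// struct from the diff above with a KubeContext field.
package utils

import (
	"reflect"
	"testing"
)

func TestCmdOptionsWithContext(t *testing.T) {
	// Without a locked context the options pass through unchanged.
	k := &Kubectl{}
	got := k.cmdOptionsWithContext("get", "pods")
	if !reflect.DeepEqual(got, []string{"get", "pods"}) {
		t.Fatalf("expected passthrough, got %v", got)
	}

	// With a locked context the --context flag is prepended.
	k = &Kubectl{KubeContext: "kind-my-beloved-cluster"}
	got = k.cmdOptionsWithContext("get", "pods")
	want := []string{"--context", "kind-my-beloved-cluster", "get", "pods"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("expected %v, got %v", want, got)
	}
}
```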
One more thing: I created the new cluster (my-beloved-cluster), checked that it was the current context, and ran KUBE_CONTEXT=kind-my-beloved-cluster make test-e2e. I noticed that --context kind-my-beloved-cluster is not appended to the commands in the logs, so I believe all the commands are still being run against the default context that kind creates for the test, not the one that was provided:
@vitorfloriano ➜ /workspaces/test-operator $ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
kind-kind kind-kind kind-kind
* kind-my-beloved-cluster kind-my-beloved-cluster kind-my-beloved-cluster
kind-my-other-cluster kind-my-other-cluster kind-my-other-cluster
@vitorfloriano ➜ /workspaces/test-operator $ kubectl config current-context
kind-my-beloved-cluster
@vitorfloriano ➜ /workspaces/test-operator $ KUBE_CONTEXT=kind-my-beloved-cluster make test-e2e
Creating Kind cluster 'test-operator-test-e2e'...
Creating cluster "test-operator-test-e2e" ...
✓ Ensuring node image (kindest/node:v1.35.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-test-operator-test-e2e"
You can now use your cluster with:
kubectl cluster-info --context kind-test-operator-test-e2e
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
"/workspaces/test-operator/bin/controller-gen" rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
"/workspaces/test-operator/bin/controller-gen" object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
KIND=kind KIND_CLUSTER=test-operator-test-e2e go test -tags=e2e ./test/e2e/ -v -ginkgo.v
=== RUN TestE2E
Starting test-operator e2e test suite
Running Suite: e2e suite - /workspaces/test-operator/test/e2e
=============================================================
Random Seed: 1769800186
Will run 2 of 2 specs
------------------------------
[BeforeSuite]
/workspaces/test-operator/test/e2e/e2e_suite_test.go:57
Using context: kind-my-beloved-cluster
STEP: building the manager image @ 01/30/26 19:09:46.913
running: "make docker-build IMG=example.com/test-operator:v0.0.1"
STEP: loading the manager image on Kind @ 01/30/26 19:09:50.887
running: "kind load docker-image example.com/test-operator:v0.0.1 --name test-operator-test-e2e"
STEP: checking if CertManager is already installed @ 01/30/26 19:10:03.03
running: "kubectl get crds"
STEP: installing CertManager @ 01/30/26 19:10:03.977
running: "kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.2/cert-manager.yaml"
running: "kubectl wait deployment.apps/cert-manager-webhook --for condition=Available --namespace cert-manager --timeout 5m"
[BeforeSuite] PASSED [46.131 seconds]
------------------------------
Manager Manager should run successfully
/workspaces/test-operator/test/e2e/e2e_test.go:144
STEP: creating manager namespace @ 01/30/26 19:10:33.053
running: "kubectl create ns test-operator-system"
STEP: labeling the namespace to enforce the restricted security policy @ 01/30/26 19:10:33.233
running: "kubectl label --overwrite ns test-operator-system pod-security.kubernetes.io/enforce=restricted"
STEP: installing CRDs @ 01/30/26 19:10:33.399
running: "make install"
STEP: deploying the controller-manager @ 01/30/26 19:10:47.954
running: "make deploy IMG=example.com/test-operator:v0.0.1"
STEP: validating that the controller-manager pod is running as expected @ 01/30/26 19:11:00.311
running: "kubectl get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ \"\\n\" }}{{ end }}{{ end }} -n test-operator-system"
running: "kubectl get pods test-operator-controller-manager-5d47b4fbc8-8wb27 -o jsonpath={.status.phase} -n test-operator-system"
running: "kubectl get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ \"\\n\" }}{{ end }}{{ end }} -n test-operator-system"
running: "kubectl get pods test-operator-controller-manager-5d47b4fbc8-8wb27 -o jsonpath={.status.phase} -n test-operator-system"
running: "kubectl get pods -l control-plane=controller-manager -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ \"\\n\" }}{{ end }}{{ end }} -n test-operator-system"
running: "kubectl get pods test-operator-controller-manager-5d47b4fbc8-8wb27 -o jsonpath={.status.phase} -n test-operator-system"
• [30.363 seconds]
------------------------------
Manager Manager should ensure the metrics endpoint is serving metrics
/workspaces/test-operator/test/e2e/e2e_test.go:176
STEP: creating a ClusterRoleBinding for the service account to allow access to metrics @ 01/30/26 19:11:03.414
running: "kubectl create clusterrolebinding test-operator-metrics-binding --clusterrole=test-operator-metrics-reader --serviceaccount=test-operator-system:test-operator-controller-manager"
STEP: validating that the metrics service is available @ 01/30/26 19:11:03.559
running: "kubectl get service test-operator-controller-manager-metrics-service -n test-operator-system"
STEP: getting the service account token @ 01/30/26 19:11:03.708
STEP: ensuring the controller pod is ready @ 01/30/26 19:11:03.839
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
running: "kubectl get pod test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system -o jsonpath={.status.conditions[?(@.type=='Ready')].status}"
STEP: verifying that the controller manager is serving the metrics server @ 01/30/26 19:11:12.977
running: "kubectl logs test-operator-controller-manager-5d47b4fbc8-8wb27 -n test-operator-system"
STEP: creating the curl-metrics pod to access the metrics endpoint @ 01/30/26 19:11:13.184
running: "kubectl run curl-metrics --restart=Never --namespace test-operator-system --image=curlimages/curl:latest --overrides {\n\t\t\t\t\t\"spec\": {\n\t\t\t\t\t\t\"containers\": [{\n\t\t\t\t\t\t\t\"name\": \"curl\",\n\t\t\t\t\t\t\t\"image\": \"curlimages/curl:latest\",\n\t\t\t\t\t\t\t\"command\": [\"/bin/sh\", \"-c\"],\n\t\t\t\t\t\t\t\"args\": [\"curl -v -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjZHTzJMSGtOdDdpcHpkVnowY2ttcmFtT2RoTDh3YU8wNC1BNjliaHNGekkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY5ODAzODYzLCJpYXQiOjE3Njk4MDAyNjMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNzU5NjllZDUtZWM4MC00ODJmLTlkMTctYzg3NzU4ZDYwMzZkIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJ0ZXN0LW9wZXJhdG9yLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJ0ZXN0LW9wZXJhdG9yLWNvbnRyb2xsZXItbWFuYWdlciIsInVpZCI6ImJlYzE5YjcxLTIyOTYtNGI1Mi1hNDI5LThhY2YzMTA0NmE1ZSJ9fSwibmJmIjoxNzY5ODAwMjYzLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6dGVzdC1vcGVyYXRvci1zeXN0ZW06dGVzdC1vcGVyYXRvci1jb250cm9sbGVyLW1hbmFnZXIifQ.U20ttwPDjNq1sqbyzIT5I5Ugw7_fai2KgAJkvylyGIeGXDEtJW-L7ohQv29-7YKDBRri4dRgbOXHaC8drCySMIXZU9xNFLD37o7CGtJC3WJdG03jf3rlfveNU8E60LyUllZpoTzwt6sY8EmIJrllcjumQxFF4aYrblxpvl5rru8GhPDD7NB3CPP0WwZ0-SCGByQnPiYXyaL0OD9sCjaxc3ORwpKqvDQEwyvPJugarJdv-ThIqGTid6ZaWcRoaZgHQJIdzjUxykG7d_auab5R_ipYXitfE-YQ6jI-6wXHsIi_XEHDeH1dPteNfSSSknPZjntw9INlt1ML4Hv-X97sqQ' https://test-operator-controller-manager-metrics-service.test-operator-system.svc.cluster.local:8443/metrics\"],\n\t\t\t\t\t\t\t\"securityContext\": {\n\t\t\t\t\t\t\t\t\"readOnlyRootFilesystem\": true,\n\t\t\t\t\t\t\t\t\"allowPrivilegeEscalation\": false,\n\t\t\t\t\t\t\t\t\"capabilities\": {\n\t\t\t\t\t\t\t\t\t\"drop\": [\"ALL\"]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"runAsNonRoot\": true,\n\t\t\t\t\t\t\t\t\"runAsUser\": 1000,\n\t\t\t\t\t\t\t\t\"seccompProfile\": {\n\t\t\t\t\t\t\t\t\t\"type\": \"RuntimeDefault\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}],\n\t\t\t\t\t\t\"serviceAccountName\": \"test-operator-controller-manager\"\n\t\t\t\t\t}\n\t\t\t\t}"
STEP: waiting for the curl-metrics pod to complete. @ 01/30/26 19:11:13.284
running: "kubectl get pods curl-metrics -o jsonpath={.status.phase} -n test-operator-system"
running: "kubectl get pods curl-metrics -o jsonpath={.status.phase} -n test-operator-system"
running: "kubectl get pods curl-metrics -o jsonpath={.status.phase} -n test-operator-system"
running: "kubectl get pods curl-metrics -o jsonpath={.status.phase} -n test-operator-system"
running: "kubectl get pods curl-metrics -o jsonpath={.status.phase} -n test-operator-system"
running: "kubectl get pods curl-metrics -o jsonpath={.status.phase} -n test-operator-system"
STEP: getting the metrics by checking curl-metrics logs @ 01/30/26 19:11:19.045
STEP: getting the curl-metrics logs @ 01/30/26 19:11:19.046
running: "kubectl logs curl-metrics -n test-operator-system"
STEP: cleaning up the curl pod for metrics @ 01/30/26 19:11:19.172
running: "kubectl delete pod curl-metrics -n test-operator-system"
STEP: undeploying the controller-manager @ 01/30/26 19:11:19.266
running: "make undeploy"
STEP: uninstalling CRDs @ 01/30/26 19:11:25.375
running: "make uninstall"
STEP: removing manager namespace @ 01/30/26 19:11:41.654
running: "kubectl delete ns test-operator-system"
• [38.914 seconds]
------------------------------
[AfterSuite]
/workspaces/test-operator/test/e2e/e2e_suite_test.go:79
STEP: uninstalling CertManager @ 01/30/26 19:11:42.331
running: "kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.2/cert-manager.yaml"
running: "kubectl delete lease cert-manager-cainjector-leader-election -n kube-system --ignore-not-found --force --grace-period=0"
running: "kubectl delete lease cert-manager-controller -n kube-system --ignore-not-found --force --grace-period=0"
[AfterSuite] PASSED [19.090 seconds]
------------------------------
Ran 2 of 2 Specs in 134.508 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestE2E (134.51s)
PASS
ok myoperator/my-operator/test/e2e 134.517s
make cleanup-test-e2e
make[1]: Entering directory '/workspaces/test-operator'
Deleting cluster "test-operator-test-e2e" ...
Deleted nodes: ["test-operator-test-e2e-control-plane"]
make[1]: Leaving directory '/workspaces/test-operator'
E2E tests now support optional context locking via the KUBE_CONTEXT environment variable. When set, tests validate and lock to the specified kubectl context, preventing context switching during execution.
Features:
- Display the current kubectl context at test startup
- Validate that the context matches KUBE_CONTEXT if set
- Add the --context flag to all kubectl commands when locked
- 100% backward compatible (opt-in feature)
Usage:
KUBE_CONTEXT=kind-test make test-e2e
This addresses feedback from PR #5329 about preventing inadvertent context switching during test execution, while maintaining ease of use (it works without any env vars by default).
Closes: #5335
Co-Author: @Sijoma